Morality in dialogue systems has attracted considerable research attention recently. A moral dialogue system could better connect with users and enhance conversation engagement by gaining users' trust. In this paper, we propose a framework, MoralDial, to train and evaluate moral dialogue systems. In our framework, we first explore the communication mechanisms of morality and decompose expressed morality into four sub-modules. These sub-modules provide a roadmap for building a moral dialogue system. Based on that, we design a simple yet effective method: constructing moral discussions between simulated users and the dialogue system from Rules of Thumb (RoTs). The constructed discussions consist of expressing, explaining, and revising moral views across dialogue exchanges, which enables conversational models to learn morality in a natural manner. Furthermore, we propose a novel evaluation method within the framework. We evaluate multiple aspects of morality by judging the relation between dialogue responses and RoTs in discussions, with particular attention to the multifaceted nature of morality. Automatic and manual experiments demonstrate that our framework is promising for training and evaluating moral dialogue systems.
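As a rough illustration of the express-explain-revise flow the abstract describes, the sketch below assembles one training discussion from a RoT. The turn structure, function name, and example strings are illustrative assumptions, not the MoralDial authors' actual data pipeline.

```python
# Hypothetical sketch: compose a moral discussion sample from a Rule of Thumb (RoT).
from dataclasses import dataclass

@dataclass
class Turn:
    speaker: str  # "user" or "bot"
    text: str

def build_moral_discussion(question: str, rot: str,
                           answer: str, revised_answer: str) -> list[Turn]:
    """Express -> explain/challenge -> revise, as outlined in the abstract."""
    return [
        Turn("user", question),                      # user raises a morally loaded question
        Turn("bot", answer),                         # bot expresses its moral view
        Turn("user", f"Why? I believe that {rot}"),  # simulated user challenges with a RoT
        Turn("bot", revised_answer),                 # bot explains and revises its view
    ]

dialogue = build_moral_discussion(
    question="Is it okay to read my partner's messages?",
    rot="it is wrong to violate someone's privacy",
    answer="It depends on the situation.",
    revised_answer="You're right that privacy matters; it's better to ask first.",
)
```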
Large pretrained language models can easily produce toxic or biased content, which hinders their practical use. To detect such toxic generations, existing methods rely on templates, real-world data extraction, crowdsourced workers, or automatic generation to construct adversarial contexts that are likely to induce toxic generations. However, what type of context is more likely to induce unsafe responses remains under-explored. In this paper, we identify context toxicity and context category (e.g., \textit{profanity}, \textit{insult}, \textit{drugs}, etc.) as two important factors causing safety issues in response generation. Hence, we propose a method called \emph{reverse generation} to construct adversarial contexts conditioned on a given response, with the flexibility to control the category, toxicity level, and inductivity of the generated contexts. Via reverse generation, we augment the existing BAD dataset and construct a new dataset, BAD+, which contains more than 120K diverse and highly inductive contexts in 12 categories. We test three popular pretrained dialogue models (Blender, DialoGPT, and Plato2) and find that BAD+ can largely expose their safety problems. Furthermore, we show that BAD+ can greatly enhance the safety of generation and reveal the key factors behind safety improvement. Our code and dataset are available at \url{https://github.com/thu-coai/Reverse_Generation}.
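One way to read "conditioned on a given response, with control over category, toxicity level, and inductivity" is a control-coded seq2seq input. The token format below is an illustrative assumption, not the paper's exact encoding.

```python
# Hypothetical sketch of a reverse-generation input: a target response plus
# control codes; a seq2seq LM fine-tuned on such pairs would emit the context.
def make_reverse_generation_input(response: str, category: str,
                                  toxic: bool, inductive: bool) -> str:
    controls = f"[category={category}] [toxic={int(toxic)}] [inductive={int(inductive)}]"
    return f"{controls} response: {response} context:"

src = make_reverse_generation_input(
    response="That's a horrible thing to say.",
    category="insult", toxic=False, inductive=True,
)
# An encoder-decoder model (e.g., BART or T5) fine-tuned on (src, context)
# pairs would then generate an adversarial context for this response.
```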
With the increasing popularity of online chatting, stickers have become more and more important in our online communication. Selecting an appropriate sticker in open-domain dialogue requires a comprehensive understanding of both the dialogue and the stickers, as well as the relationship between the two modalities. To address these challenges, we propose a multi-task learning method consisting of three auxiliary tasks to enhance the understanding of the dialogue history, emotion, and semantic meaning. Extensive experiments on a recent challenging dataset show that our model can better combine the multimodal information and achieve higher accuracy over strong baselines. Ablation studies further verify the effectiveness of each auxiliary task. Our code is available at \url{https://github.com/nonstopfor/sticker-selection}.
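A minimal sketch of the multi-task objective implied above: one main sticker-selection loss plus the three auxiliary losses. The weights and the way the losses are combined are assumptions for illustration only.

```python
# Hypothetical weighted multi-task loss (PyTorch); weights are made up.
import torch

def total_loss(main: torch.Tensor, history: torch.Tensor,
               emotion: torch.Tensor, semantic: torch.Tensor,
               w=(1.0, 0.3, 0.3, 0.3)) -> torch.Tensor:
    """Main sticker-selection loss plus three auxiliary task losses."""
    return w[0] * main + w[1] * history + w[2] * emotion + w[3] * semantic
```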
The LIDC-IDRI database is the most popular benchmark for lung cancer prediction. However, with malignancy annotations subjectively assessed by radiologists, nodules in LIDC may have entirely different labels from the pathological ground truth, introducing label assignment errors and subsequent supervision bias during training. The LIDC database therefore requires more objective labels for learning-based cancer prediction. Based on an additional small dataset containing 180 nodules diagnosed by pathological examination, we propose to re-label the LIDC data to mitigate the effect of the original annotation bias on this popular benchmark. We demonstrate in this paper that providing new labels via similar-nodule retrieval based on metric learning is an effective re-labeling strategy. Training on these re-labeled LIDC nodules improves model performance, and the gain is enhanced when new labels for uncertain nodules are added. We further infer that re-labeling LIDC is a convenient short-term path toward better lung cancer prediction, while building a large pathologically-proven nodule database provides the long-term solution.
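The retrieval-based re-labeling idea can be sketched as a k-nearest-neighbor vote in a learned embedding space. Everything below, from the function name to k = 5 majority voting, is an assumption about one plausible instantiation, not the paper's exact procedure.

```python
# Hypothetical sketch: re-label LIDC nodules by retrieving their nearest
# pathologically confirmed nodules in a metric-learning embedding space.
import numpy as np

def relabel_by_retrieval(lidc_emb: np.ndarray,    # (N, d) LIDC nodule embeddings
                         ref_emb: np.ndarray,     # (M, d) pathologically confirmed
                         ref_labels: np.ndarray,  # (M,) 0 = benign, 1 = malignant
                         k: int = 5) -> np.ndarray:
    dists = np.linalg.norm(lidc_emb[:, None, :] - ref_emb[None, :, :], axis=-1)
    knn = np.argsort(dists, axis=1)[:, :k]        # indices of k nearest references
    votes = ref_labels[knn].mean(axis=1)          # fraction of malignant neighbors
    return (votes > 0.5).astype(int)              # majority-vote new label
```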
To investigate whether the pleura, airways, and vessels surrounding a nodule on non-contrast computed tomography (CT) can discriminate between benign and malignant pulmonary nodules. The LIDC-IDRI dataset, one of the largest publicly available CT databases, was studied. A total of 1556 nodules from 694 patients were involved in the statistical analysis, where nodules with average scores < 3 and > 3 were denoted as benign and malignant, respectively. In addition, 339 nodules from 113 patients with pathological diagnoses were independently evaluated. Computer algorithms were developed to segment pulmonary structures and to quantify the distances from a nodule to the pleural surface, airways, and vessels, as well as the counting number and normalized volume of airways and vessels near the nodule. Odds ratio (OR) and Chi-square ($\chi^2$) testing were performed to demonstrate the correlation between features of the surrounding structures and nodule malignancy. A non-parametric receiver operating characteristic (ROC) analysis was conducted with logistic regression to evaluate the discrimination ability of each structure. For the benign and malignant groups, the average distances from nodules to the pleura, airways, and vessels were (6.56, 5.19), (37.08, 26.43), and (1.42, 17.07) mm, respectively. The correlations between nodules and the counting numbers of airways and vessels were (OR = 22.96, $\chi^2$ = 105.04) and (OR = 7.06, $\chi^2$ = 290.11), respectively. The correlations between nodules and the volumes of airways and vessels were (OR = 9.19, $\chi^2$ = 159.02) and (OR = 2.29, $\chi^2$ = 55.89). The areas under the curve (AUC) for the pleura, airways, and vessels were 0.5202, 0.6943, and 0.6529, respectively. Our results show that, compared with benign nodules, malignant nodules are usually surrounded by more pulmonary structures, suggesting that features of these structures could serve as lung cancer biomarkers.
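The reported OR and $\chi^2$ values follow from standard 2x2 contingency-table analysis. The sketch below shows only the computation pattern; the counts in it are made up, not taken from the study.

```python
# Odds ratio and chi-square test on a 2x2 table: structure feature
# (absent/present) vs. nodule class (benign/malignant). Counts are fabricated
# purely to demonstrate the calculation.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[120, 30],    # benign:    feature absent / present
                  [ 40, 90]])   # malignant: feature absent / present

chi2, p, dof, expected = chi2_contingency(table)
odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
print(f"OR = {odds_ratio:.2f}, chi^2 = {chi2:.2f}, p = {p:.3g}")
```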
In this paper, we propose a fully differentiable quantization method for vision transformers (ViTs) named Q-ViT, in which both the quantization scales and the bit-widths are learnable parameters. Specifically, based on our observation that heads in ViT exhibit different quantization robustness, we leverage head-wise bit-widths to squeeze the size of Q-ViT while preserving performance. In addition, we propose a novel technique named switchable scale to resolve the convergence problem in the joint training of quantization scale and bit-width. In this way, Q-ViT pushes the limit of ViT quantization to 3-bit without heavy performance degradation. Moreover, we analyze the quantization robustness of every architectural component of ViT and show that multi-head self-attention (MSA) and Gaussian error linear units (GELU) are the key aspects of ViT quantization. This study provides insights for further research on ViT quantization. Extensive experiments on different ViT models, such as DeiT and Swin Transformer, demonstrate the effectiveness of our quantization method. In particular, our method outperforms the state-of-the-art uniform quantization method by 1.5% on DeiT-Tiny.
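To make "learnable scale and bit-width" concrete, here is a generic fake-quantization sketch in the spirit of Q-ViT: a continuous bit-width parameter defines the number of levels, and a straight-through estimator lets gradients flow through the rounding. The parameterization, clamping range, and restriction to non-negative activations are all assumptions; the paper's exact formulation may differ.

```python
# Hypothetical fake-quantizer with learnable scale and bit-width (PyTorch).
import torch
import torch.nn as nn

class LearnableFakeQuant(nn.Module):
    def __init__(self, init_scale: float = 0.1, init_bits: float = 4.0):
        super().__init__()
        self.scale = nn.Parameter(torch.tensor(init_scale))
        self.bits = nn.Parameter(torch.tensor(init_bits))  # continuous, learnable

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Levels derived from the (clamped) continuous bit-width parameter.
        qmax = 2 ** self.bits.clamp(2.0, 8.0) - 1
        q = torch.clamp(x / self.scale, 0, 1) * qmax       # unsigned activations assumed
        q_rounded = q + (q.round() - q).detach()           # straight-through rounding
        return q_rounded / qmax * self.scale
```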
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot benefit, or benefit only marginally, from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, inputs, network regularization, sequential distillation, etc., revealing that: 1) distilling token relations is more effective than CLS-token- and feature-based distillation; 2) using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) weak regularization is preferred; etc. With these findings, we achieve significant fine-tuning accuracy improvements over MIM pre-training from scratch on ImageNet-1K classification, with +4.2%/+2.4%/+1.4% gains for ViT-Tiny, ViT-Small, and ViT-Base, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, setting a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models: exploring better training methods rather than introducing inductive biases into architectures as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
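One plausible reading of "distilling token relations" is matching the student's token-to-token attention relations to those of an intermediate teacher layer. The loss below is a simplified sketch under that assumption, not the authors' exact formulation.

```python
# Hypothetical token-relation distillation loss: KL between teacher and
# student attention relations (softmax of scaled QK^T).
import torch
import torch.nn.functional as F

def relation_distill_loss(q_s, k_s, q_t, k_t):
    """q_*, k_*: (batch, heads, tokens, dim) query/key tensors."""
    rel_s = F.log_softmax(q_s @ k_s.transpose(-2, -1) / q_s.shape[-1] ** 0.5, dim=-1)
    rel_t = F.softmax(q_t @ k_t.transpose(-2, -1) / q_t.shape[-1] ** 0.5, dim=-1)
    return F.kl_div(rel_s, rel_t, reduction="batchmean")
```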
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly by encoding the 3D points into multi-modal features. The core design of CMT is quite simple, while its performance is impressive: CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT demonstrates strong robustness even when the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
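An illustrative sketch of the core idea, not the authors' code: image and point-cloud tokens, each augmented with a position encoding derived from 3D points, are simply concatenated and consumed by a standard transformer decoder with object queries. Token counts and dimensions below are arbitrary placeholders.

```python
# Hypothetical sketch of implicit multi-modal alignment via concatenated tokens.
import torch
import torch.nn as nn

d, n_img, n_pts, n_query = 256, 1000, 2000, 900
img_tokens = torch.randn(1, n_img, d) + torch.randn(1, n_img, d)  # features + 3D pos enc
pts_tokens = torch.randn(1, n_pts, d) + torch.randn(1, n_pts, d)

memory = torch.cat([img_tokens, pts_tokens], dim=1)               # no explicit view transform
decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d_model=d, nhead=8, batch_first=True), num_layers=6)
queries = torch.randn(1, n_query, d)                              # learnable object queries
out = decoder(queries, memory)                                    # fed to 3D box heads
```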
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset; a model trained on this smaller distilled dataset can attain performance comparable to a model trained on the original training dataset. However, existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility, and the security risks stemming from them have not been explored. This study performs the first backdoor attack against models trained on data distilled by dataset distillation in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent them.
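The simpler of the two attacks, NAIVEATTACK, amounts to stamping a fixed trigger onto a fraction of the raw images (with flipped labels) before distillation runs. The sketch below illustrates that idea; the poison ratio, trigger shape, and placement are assumptions, not the paper's settings.

```python
# Hypothetical NAIVEATTACK-style poisoning of raw data before distillation.
import numpy as np

def naive_attack(images: np.ndarray, labels: np.ndarray,
                 target: int, ratio: float = 0.1, size: int = 3):
    """images: (N, H, W, C) in [0, 1]; returns a poisoned copy."""
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * ratio)
    idx = np.random.choice(len(images), n_poison, replace=False)
    images[idx, -size:, -size:, :] = 1.0  # white square trigger, bottom-right corner
    labels[idx] = target                  # attacker-chosen target class
    return images, labels
```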
Blind image quality assessment (BIQA) remains challenging due to the diversity of distortions and the variation of image content, which complicate the distortion patterns across different scales and aggravate the difficulty of the regression problem in BIQA. However, existing BIQA methods often fail to consider multi-scale distortion patterns and image content, and little research has been done on learning strategies that make the regression model perform better. In this paper, we propose a simple yet effective Progressive Multi-Task Image Quality Assessment (PMT-IQA) model, which contains a multi-scale feature extraction module (MS) and a progressive multi-task learning module (PMT), to help the model learn complex distortion patterns and better optimize the regression problem, in line with the easy-to-hard law of the human learning process. To verify the effectiveness of the proposed PMT-IQA model, we conduct experiments on four widely used public datasets; the experimental results indicate that PMT-IQA outperforms the comparison approaches, and that both the MS and PMT modules improve the model's performance.
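One speculative reading of "progressive" easy-to-hard multi-task learning is a schedule that starts by emphasizing an easier auxiliary task (e.g., coarse quality classification) and gradually shifts weight to the harder quality-score regression. The linear schedule below is purely an assumption for illustration.

```python
# Hypothetical progressive task-weight schedule (easy -> hard).
def progressive_weights(step: int, total_steps: int) -> tuple[float, float]:
    p = min(step / total_steps, 1.0)
    return 1.0 - p, p  # (easy-task weight, hard-task weight)

# loss = w_easy * classification_loss + w_hard * regression_loss
w_easy, w_hard = progressive_weights(step=2500, total_steps=10000)
```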